12 research outputs found

    Conditional Lower Bounds for Dynamic Geometric Measure Problems

    Get PDF

    Efficiently stabbing convex polygons and variants of the Hadwiger-Debrunner (p, q)-theorem

    Full text link
    Hadwiger and Debrunner showed that for families of convex sets in $\mathbb{R}^d$ with the property that among any $p$ of them some $q$ have a common point, the whole family can be stabbed with $p-q+1$ points if $p \geq q \geq d+1$ and $(d-1)p < d(q-1)$. This generalizes a classical result by Helly. We show how such a stabbing set can be computed for a family of convex polygons in the plane with a total of $n$ vertices in $O((p-q+1)n^{4/3}\log^{8} n(\log\log n)^{1/3} + np^2)$ expected time. For polyhedra in $\mathbb{R}^3$, we get an algorithm running in $O((p-q+1)n^{5/2}\log^{10} n(\log\log n)^{1/6} + np^3)$ expected time. We also investigate other conditions on convex polygons for which our algorithm can find a fixed number of points stabbing them. Finally, we show that analogues of the Hadwiger-Debrunner $(p,q)$-theorem hold in other settings, such as convex sets in $\mathbb{R}^d\times\mathbb{Z}^k$ or abstract convex geometries.
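
    As a concrete illustration of the statement above, the sketch below handles the one-dimensional case (intervals on a line, d = 1), where the conditions reduce to p ≥ q ≥ 2: it brute-force-checks the (p, q) property and builds a stabbing set with the standard right-endpoint sweep. This is not the paper's algorithm for convex polygons; the function names and example input are ours.

```python
from itertools import combinations

# Minimal illustration of (p,q)-piercing in one dimension (d = 1): intervals on
# the real line, where intersection tests are trivial. Not the paper's algorithm;
# it only shows what a stabbing set is and checks the p-q+1 bound on a tiny input.

def has_pq_property(intervals, p, q):
    """Among any p intervals, do some q share a common point?"""
    for group in combinations(intervals, p):
        if not any(max(lo for lo, _ in sub) <= min(hi for _, hi in sub)
                   for sub in combinations(group, q)):
            return False
    return True

def greedy_stab(intervals):
    """Minimum piercing set for intervals: sweep by right endpoint."""
    points = []
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < lo:
            points.append(hi)            # stab at the right endpoint
    return points

if __name__ == "__main__":
    ivs = [(0, 2), (1, 3), (2, 5), (6, 8), (7, 9)]
    p, q = 3, 2
    assert has_pq_property(ivs, p, q)
    stabs = greedy_stab(ivs)
    print(stabs)                         # [2, 8]
    assert len(stabs) <= p - q + 1       # Hadwiger-Debrunner bound for d = 1
```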

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Get PDF
    Consider a variant of Tetris played on a board of width $w$ and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of $O(n)$ rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction from the Multiphase problem [Pătraşcu, 2010] that on a board of width $w=\Theta(n)$, if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time $O(n^{1/2-\epsilon})$ simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in $O(n^{1/2}\log^{3/2}n)$ time on boards of width $n^{O(1)}$, matching the lower bound up to an $n^{o(1)}$ factor. Comment: Correction of typos and other minor corrections.
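
    The query/update interface described above can be made concrete with a naive skyline simulation; the sketch below takes O(w) time per operation and is unrelated to the paper's data structure, which supports both operations in roughly O(√n) time. Class and method names are illustrative only.

```python
# Naive skyline simulation of the two operations, assuming integer coordinates
# on a board of width w. Each operation costs O(w) time; the paper's structure
# is far more involved. Names are hypothetical, not taken from the paper.

class GreedyTetrisBoard:
    def __init__(self, w):
        self.w = w
        self.height = [0] * w            # skyline: height[i] = top of column i

    def query_drop(self, width, height):
        """Return the x-position where a width x height rectangle lands lowest."""
        best_x, best_top = 0, max(self.height[0:width])
        for x in range(1, self.w - width + 1):
            top = max(self.height[x:x + width])
            if top < best_top:
                best_x, best_top = x, top
        return best_x

    def update_drop(self, x, width, height):
        """Drop a rectangle at position x; return the highest point on the board."""
        top = max(self.height[x:x + width])
        for i in range(x, x + width):
            self.height[i] = top + height
        return max(self.height)

board = GreedyTetrisBoard(8)
x = board.query_drop(3, 2)               # greedy move for a 3-wide, 2-tall piece
print(board.update_drop(x, 3, 2))        # new maximum height of the skyline
```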

    Conditional Lower Bounds for Dynamic Geometric Measure Problems

    Get PDF
    We give new polynomial lower bounds for a number of dynamic measure problems in computational geometry. These lower bounds hold in the Word-RAM model, conditioned on the hardness of either 3SUM, APSP, or the Online Matrix-Vector Multiplication problem [Henzinger et al., STOC 2015]. In particular we get lower bounds in the incremental and fully-dynamic settings for counting maximal or extremal points in R^3, different variants of Klee's Measure Problem, problems related to finding the largest empty disk in a set of points, and querying the size of the i'th convex layer in a planar set of points. We also answer a question of Chan et al. [SODA 2022] by giving a conditional lower bound for dynamic approximate square set cover. While many conditional lower bounds for dynamic data structures have been proven since the seminal work of Patrascu [STOC 2010], few of them relate to computational geometry problems. This is the first paper focusing on this topic. Most problems we consider can be solved in O(n log n) time in the static case and their dynamic versions have only been approached from the perspective of improving known upper bounds. One exception to this is Klee's measure problem in R^2, for which Chan [CGTA 2010] gave an unconditional Ω(√n) lower bound on the worst-case update time. By a similar approach, we show that such a lower bound also holds for an important special case of Klee's measure problem in R^3 known as the Hypervolume Indicator problem, even for amortized runtime in the incremental setting. Comment: Improved presentation, improved the reduction for the Hypervolume Indicator problem and added a reduction for dynamic approximate square set cover.
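
    For readers unfamiliar with Klee's Measure Problem mentioned above, the sketch below computes the static two-dimensional version (area of the union of axis-aligned rectangles) by brute force over a compressed grid. It is only meant to make the underlying measure problem concrete; the paper proves lower bounds for the dynamic setting and does not use this approach.

```python
# Klee's Measure Problem in the plane: area of the union of axis-aligned
# rectangles, via coordinate compression. Brute force (roughly cubic in the
# number of rectangles), purely illustrative.

def union_area(rects):
    """rects: list of (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx = (xs[i] + xs[i + 1]) / 2          # center of the elementary cell
            cy = (ys[j] + ys[j + 1]) / 2
            if any(x1 <= cx <= x2 and y1 <= cy <= y2 for x1, y1, x2, y2 in rects):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))    # 7: two unit-overlapping 2x2 squares
```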

    Approximability of (Simultaneous) Class Cover for Boxes

    Full text link
    Bereg et al. (2012) introduced the Boxes Class Cover problem, which has its roots in classification and clustering applications: Given a set of n points in the plane, each colored red or blue, find the smallest cardinality set of axis-aligned boxes whose union covers the red points without covering any blue point. In this paper we give an alternative proof of APX-hardness for this problem, which also yields an explicit lower bound on its approximability. Our proof also directly applies when restricted to sets of points in general position and to the case where so-called half-strips are considered instead of boxes, which is a new result. We also introduce a symmetric variant of this problem, which we call Simultaneous Boxes Class Cover and can be stated as follows: Given a set S of n points in the plane, each colored red or blue, find the smallest cardinality set of axis-aligned boxes which together cover S such that all boxes cover only points of the same color and no box covering a red point intersects a box covering a blue point. We show that this problem is also APX-hard and give a polynomial-time constant-factor approximation algorithm.
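
    The sketch below shows one simple (logarithmic-factor) heuristic for the Boxes Class Cover problem: candidate boxes spanned by pairs of red points are filtered against the blue points and then chosen with the standard greedy set-cover rule. It is not the paper's constant-factor algorithm for the simultaneous variant, and it assumes no red point coincides with a blue point.

```python
# Greedy set-cover heuristic for Boxes Class Cover, purely illustrative.
# Assumes the red and blue point sets are disjoint, so the degenerate box
# around a single red point is always a feasible candidate.

from itertools import combinations_with_replacement

def covers(box, pt):
    x1, y1, x2, y2 = box
    return x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2

def boxes_class_cover(red, blue):
    # Candidate boxes: bounding boxes of pairs of red points, blue-free only.
    candidates = []
    for a, b in combinations_with_replacement(red, 2):
        box = (min(a[0], b[0]), min(a[1], b[1]), max(a[0], b[0]), max(a[1], b[1]))
        if not any(covers(box, q) for q in blue):
            candidates.append(box)
    uncovered, chosen = set(red), []
    while uncovered:
        # Standard greedy rule: pick the box covering the most uncovered red points.
        best = max(candidates, key=lambda b: sum(covers(b, p) for p in uncovered))
        chosen.append(best)
        uncovered -= {p for p in uncovered if covers(best, p)}
    return chosen

red = [(0, 0), (1, 0), (4, 4), (5, 5)]
blue = [(2, 2)]
print(boxes_class_cover(red, blue))      # two boxes, both avoiding the blue point
```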

    Finding the saddlepoint faster than sorting

    Full text link
    A saddlepoint of an $n \times n$ matrix $A$ is an entry of $A$ that is a maximum in its row and a minimum in its column. Knuth (1968) gave several different algorithms for finding a saddlepoint. The worst-case running time of these algorithms is $\Theta(n^2)$, and Llewellyn, Tovey, and Trick (1988) showed that this cannot be improved, as in the worst case all entries of $A$ may need to be queried. A strict saddlepoint of $A$ is an entry that is the strict maximum in its row and the strict minimum in its column. The strict saddlepoint (if it exists) is unique, and Bienstock, Chung, Fredman, Schäffer, Shor, and Suri (1991) showed that it can be found in time $O(n \log n)$, where a dominant runtime contribution is sorting the diagonal of the matrix. This upper bound has not been improved since 1991. In this paper we show that the strict saddlepoint can be found in $O(n \log^* n)$ time, where $\log^*$ denotes the very slowly growing iterated logarithm function, coming close to the lower bound of $\Omega(n)$. In fact, we can also compute, within the same runtime, the value of a non-strict saddlepoint, assuming one exists. Our algorithm is based on a simple recursive approach, a feasibility test inspired by searching in sorted matrices, and a relaxed notion of saddlepoint. Comment: To be presented at SOSA 2024.
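
    For reference, a strict saddlepoint as defined above can be found naively in O(n^2) time by scanning each row; the paper's O(n log* n) algorithm is far more subtle. The sketch below only makes the object being searched for concrete.

```python
# Brute-force strict saddlepoint: an entry that is the strict maximum in its row
# and the strict minimum in its column. O(n^2) time, purely illustrative.

def strict_saddlepoint(A):
    n = len(A)
    for i in range(n):
        j = max(range(n), key=lambda c: A[i][c])                     # row-maximum column
        row_max_unique = sum(A[i][c] == A[i][j] for c in range(n)) == 1
        col_min_strict = all(A[r][j] > A[i][j] for r in range(n) if r != i)
        if row_max_unique and col_min_strict:
            return i, j                                               # unique if it exists
    return None

A = [[1, 3, 2],
     [4, 7, 5],
     [6, 9, 8]]
print(strict_saddlepoint(A))   # (0, 1): 3 is the strict max of row 0, strict min of column 1
```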

    Finding the saddlepoint faster than sorting

    Get PDF
    A saddlepoint of an $n \times n$ matrix $A$ is an entry of $A$ that is a maximum in its row and a minimum in its column. Knuth (1968) gave several different algorithms for finding a saddlepoint. The worst-case running time of these algorithms is $\Theta(n^2)$, and Llewellyn, Tovey, and Trick (1988) showed that this cannot be improved, as in the worst case all entries of $A$ may need to be queried. A strict saddlepoint of $A$ is an entry that is the strict maximum in its row and the strict minimum in its column. The strict saddlepoint (if it exists) is unique, and Bienstock, Chung, Fredman, Schäffer, Shor, and Suri (1991) showed that it can be found in time $O(n \log n)$, where a dominant runtime contribution is sorting the diagonal of the matrix. This upper bound has not been improved since 1991. In this paper we show that the strict saddlepoint can be found in $O(n \log^* n)$ time, where $\log^*$ denotes the very slowly growing iterated logarithm function, coming close to the lower bound of $\Omega(n)$. In fact, we can also compute, within the same runtime, the value of a non-strict saddlepoint, assuming one exists. Our algorithm is based on a simple recursive approach, a feasibility test inspired by searching in sorted matrices, and a relaxed notion of saddlepoint.

    How Fast Can We Play Tetris Greedily with Rectangular Pieces?

    Get PDF
    Consider a variant of Tetris played on a board of width w and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of O(n) rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction from the Multiphase problem [Pătraşcu, 2010] that on a board of width w = Θ(n), if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time O(n^{1/2-ε}) simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in O(n^{1/2} log^{3/2} n) time on boards of width n^{O(1)}, matching the lower bound up to an n^{o(1)} factor.

    Efficiently Stabbing Convex Polygons and Variants of the Hadwiger-Debrunner (p, q)-Theorem

    No full text